We propose an end-to-end inverse rendering pipeline called SupeRVol that recovers 3D shape and material parameters from a set of color images in a super-resolution manner. To this end, we represent both the bidirectional reflectance distribution function (BRDF) and the signed distance function (SDF) by multi-layer perceptrons. In order to obtain both the surface shape and its reflectance properties, we resort to a differentiable volume renderer with a physically based illumination model that allows us to decouple reflectance and lighting. This physical model takes into account the effect of the camera's point spread function, thereby enabling a reconstruction of shape and material in super-resolution quality. Experimental validation confirms that SupeRVol achieves state-of-the-art inverse rendering quality. It generates reconstructions that are sharper than the individual input images, making the method ideally suited for 3D modeling from low-resolution imagery.
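A minimal sketch (not the authors' implementation) of the two ideas named in the abstract: shape and reflectance represented by MLPs, and a point-spread-function (PSF) blur applied to the rendered image before comparing it with the low-resolution observation. Network sizes and the Gaussian PSF are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDFNet(nn.Module):
    """MLP mapping a 3D point to a signed distance value."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):
        return self.net(x)

class BRDFNet(nn.Module):
    """MLP mapping a surface point to spatially varying BRDF parameters
    (here: RGB albedo and a scalar roughness)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )
    def forward(self, x):
        out = self.net(x)
        albedo = torch.sigmoid(out[..., :3])
        roughness = torch.sigmoid(out[..., 3:4])
        return albedo, roughness

def psf_blur(image, sigma=1.5, ksize=7):
    """Convolve a rendered HxWx3 image with an isotropic Gaussian PSF."""
    ax = torch.arange(ksize) - ksize // 2
    g = torch.exp(-ax.float() ** 2 / (2 * sigma ** 2))
    kernel2d = torch.outer(g, g)
    kernel2d = kernel2d / kernel2d.sum()
    kernel = kernel2d.expand(3, 1, ksize, ksize)        # one kernel per channel
    img = image.permute(2, 0, 1).unsqueeze(0)           # -> 1x3xHxW
    blurred = F.conv2d(img, kernel, padding=ksize // 2, groups=3)
    return blurred.squeeze(0).permute(1, 2, 0)

def super_resolution_loss(rendered_hr, observed_lr):
    """Render at high resolution, blur with the PSF, downsample, and compare
    against the observed low-resolution photograph."""
    blurred = psf_blur(rendered_hr)
    lr = F.interpolate(blurred.permute(2, 0, 1).unsqueeze(0),
                       size=observed_lr.shape[:2], mode='area')
    return F.mse_loss(lr.squeeze(0).permute(1, 2, 0), observed_lr)
```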
Machine learning models are typically evaluated by computing their similarity with reference annotations and trained by maximizing their similarity with such annotations. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect one annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
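An illustrative sketch of the approximation idea: once the model's similarity to the reference annotation reaches the level at which raters agree with each other, further similarity gains no longer promise better real-world performance. The use of the Dice score and simple averaging below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def approx_pgt(annotations_per_rater):
    """Approximate PGT as the mean pairwise inter-rater Dice.

    annotations_per_rater: list of binary masks, one per rater, for the same image.
    """
    scores = []
    for i in range(len(annotations_per_rater)):
        for j in range(i + 1, len(annotations_per_rater)):
            scores.append(dice(annotations_per_rater[i], annotations_per_rater[j]))
    return float(np.mean(scores))

# Usage: if the model's Dice against the reference exceeds the PGT estimate,
# treat further similarity gains with caution.
rng = np.random.default_rng(0)
rater_masks = [rng.random((64, 64)) > 0.5 for _ in range(3)]
model_mask = rng.random((64, 64)) > 0.5
pgt = approx_pgt(rater_masks)
model_score = dice(model_mask, rater_masks[0])
print(f"PGT ~ {pgt:.3f}, model vs. reference: {model_score:.3f}")
```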
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of the challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
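A minimal sketch of the most commonly reported strategy for data samples that are too large to be processed at once: patch-based training, i.e., drawing random crops from a large (e.g., 3D) volume. The patch size and the NumPy-based implementation are illustrative choices, not taken from any specific challenge entry.

```python
import numpy as np

def random_patch(volume: np.ndarray, patch_size=(64, 64, 64), rng=None):
    """Return a random crop of `patch_size` from a 3D volume."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices]

volume = np.zeros((256, 256, 128), dtype=np.float32)   # placeholder CT/MR volume
patch = random_patch(volume)
print(patch.shape)  # (64, 64, 64)
```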
Quantifying the perceptual similarity of two images is a long-standing problem in low-level computer vision. The natural image domain commonly relies on supervised learning, e.g., a pre-trained VGG, to obtain a latent representation. However, due to domain shift, pre-trained models from the natural image domain might not apply to other image domains, such as medical imaging. Notably, in medical imaging, evaluating perceptual similarity is performed exclusively by specialists trained extensively in diverse medical fields. Thus, medical imaging remains devoid of task-specific, objective perceptual measures. This work answers the question: Is it necessary to rely on supervised learning to obtain an effective representation that can measure perceptual similarity, or is self-supervision sufficient? To understand whether a recent contrastive self-supervised representation (CSR) may come to the rescue, we start with natural images and systematically evaluate CSR as a metric across numerous contemporary architectures and tasks and compare it with existing methods. We find that in the natural image domain, CSR performs on par with supervised representations on several perceptual tests when used as a metric, and that in the medical domain, CSR better quantifies perceptual similarity with respect to expert ratings. We also demonstrate that CSR can significantly improve image quality in two image synthesis tasks. Finally, our extensive results suggest that perceptuality is an emergent property of CSR, which can be adapted to many image domains without requiring annotations.
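A sketch of the general idea of using a learned representation as a perceptual similarity measure: embed both images with a frozen encoder and compare the features. The ResNet backbone and cosine distance below are illustrative stand-ins, not the specific CSR models or the distance used in the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet50(weights=None)   # in practice: load self-supervised weights (e.g., SimCLR/MoCo)
encoder.fc = torch.nn.Identity()          # drop the classification head
encoder.eval()

@torch.no_grad()
def perceptual_distance(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    """img_a, img_b: 1x3xHxW tensors, already normalized."""
    fa = F.normalize(encoder(img_a), dim=-1)
    fb = F.normalize(encoder(img_b), dim=-1)
    return (1.0 - (fa * fb).sum(dim=-1)).item()   # cosine distance

a = torch.rand(1, 3, 224, 224)
b = torch.rand(1, 3, 224, 224)
print(perceptual_distance(a, b))
```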
Image segmentation is a widely researched field in which neural networks find vast applications in many facets of technology. Some of the most popular approaches to training segmentation networks employ loss functions that optimize pixel overlap, an objective that is insufficient for many segmentation tasks. In recent years, these limitations have fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction. In this work, we propose the first topologically and feature-wise accurate metric and loss function for supervised image segmentation, which we term Betti matching. We show how induced matchings guarantee a spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute the Betti matching of images. We show that the Betti matching error is an interpretable metric for evaluating the topological correctness of segmentations that is more sensitive than the well-established Betti number error. Moreover, the differentiability of the Betti matching loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance. Our code is available at https://github.com/nstucki/Betti-matching.
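For context, the sketch below illustrates only the simpler baseline the abstract contrasts against, the Betti number error, for the 0-th Betti number (connected components) of 2D binary masks. The full Betti matching additionally requires persistent-homology barcodes and an induced, spatially consistent matching between them, which is beyond this snippet.

```python
import numpy as np
from scipy import ndimage

def betti_0(mask: np.ndarray) -> int:
    """Number of connected components of the foreground."""
    _, num = ndimage.label(mask)
    return num

def betti_0_error(prediction: np.ndarray, ground_truth: np.ndarray) -> int:
    """Absolute difference of the number of connected components."""
    return abs(betti_0(prediction) - betti_0(ground_truth))

gt = np.zeros((32, 32), dtype=bool); gt[4:10, 4:10] = True; gt[20:26, 20:26] = True
pred = np.zeros((32, 32), dtype=bool); pred[4:26, 4:26] = True   # merges the two objects
print(betti_0_error(pred, gt))  # 1: one component instead of two
```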
Statistical shape modeling aims to capture the shape variation of anatomical structures occurring within a given population. Shape models are used in many tasks such as shape reconstruction and image segmentation, but also shape generation and classification. Existing shape priors either require dense correspondences between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondences between training instances. It relies on a hierarchy of continuous deformation flows parameterized by neural networks. Our model outperforms state-of-the-art methods in providing expressive and robust shape priors on the distal femur and the liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Finally, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm).
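A rough sketch (under illustrative assumptions, not the FlowSSM code) of the core idea: deform a template shape with a continuous flow whose velocity field is parameterized by a neural network, so no dense point correspondences between training shapes are needed. A hierarchy would stack several such flows at increasing levels of detail.

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """MLP predicting a velocity for a 3D point, conditioned on a latent code."""
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),
        )
    def forward(self, points, z):
        z_exp = z.expand(points.shape[0], -1)
        return self.net(torch.cat([points, z_exp], dim=-1))

def deform(template: torch.Tensor, field: VelocityField, z: torch.Tensor,
           steps: int = 8) -> torch.Tensor:
    """Integrate the flow with simple Euler steps (an ODE solver in practice)."""
    x = template
    dt = 1.0 / steps
    for _ in range(steps):
        x = x + dt * field(x, z)
    return x

template = torch.rand(1024, 3)          # template surface points
z = torch.randn(1, 32)                  # per-shape latent code
field = VelocityField()
deformed = deform(template, field, z)
print(deformed.shape)                   # torch.Size([1024, 3])
```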
Imaging markers of cerebral small vessel disease provide valuable information about brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating could benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the Vascular Lesions Detection and Segmentation ("Where is VALDO?") challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while utilizing weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Multi-center data were used for both training and evaluation. The results showed a large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, but no practically usable results yet for Task 3 - Lacunes. They also highlighted performance inconsistencies that may prevent use at the individual level, while still demonstrating usefulness at the population level.
Humans innately measure the distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for this similarity when retrieving similar instances. Learning a good similarity function from human annotations can improve the quality of retrieval. This work uses deep metric learning to learn such user-defined similarity functions from very few annotations on a large dataset of football trajectories. We extend recent work on entropy-based active learning with triplet mining to collect annotations from human participants that are easy to obtain yet still informative, and use them to train deep convolutional networks that generalize to unseen samples. Our user study shows that our approach improves the quality of information retrieval compared to previous deep metric learning approaches that rely on Siamese networks. Specifically, by analyzing the response efficiency of participants, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners. To this end, we collect accuracy, algorithmic time complexity, participant fatigue and response times, qualitative self-assessments and statements, as well as the impact of mixed-expertise annotators and their consistency on model performance and transfer learning.
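A sketch of the deep-metric-learning building block described above: an embedding network trained with a triplet loss on human-annotated triplets (anchor, similar, dissimilar). The active-learning component would choose which triplets to show to annotators; its selection criterion is omitted here. The network shape and the 1D-convolutional encoder for trajectories are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Embed a fixed-length 2D trajectory (T x 2) into a metric space."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, traj):                 # traj: B x T x 2
        h = self.conv(traj.transpose(1, 2))  # -> B x 64 x 1
        return nn.functional.normalize(self.fc(h.squeeze(-1)), dim=-1)

encoder = TrajectoryEncoder()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# One training step on a batch of annotated triplets.
anchor, positive, negative = (torch.rand(16, 100, 2) for _ in range(3))
loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
```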
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. To reliably characterize the retinal vasculature, it is necessary to automatically extract quantitative metrics from these images. The computation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based segmentation methods mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground-truth labels, thereby eliminating the need for manually annotated training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses, and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process, including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of the synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by the competitive quantitative and superior qualitative performance of our method, we believe it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
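An illustrative sketch of the second component described above: image augmentations that push a clean, simulated vessel map toward the look of an acquired OCTA scan. The specific operations below (speckle-like noise, banding artifacts, contrast jitter) are assumptions for illustration, not the paper's exact augmentation suite.

```python
import numpy as np

def octa_style_augment(vessel_map: np.ndarray, rng=None) -> np.ndarray:
    """vessel_map: HxW array in [0, 1] with ground-truth vessels."""
    rng = rng or np.random.default_rng()
    img = vessel_map.astype(np.float32)

    # multiplicative speckle-like noise
    img = img * rng.normal(1.0, 0.25, size=img.shape).astype(np.float32)

    # horizontal banding, mimicking line-scan intensity variation
    band = 1.0 + 0.15 * np.sin(np.linspace(0, 20 * np.pi, img.shape[0]))
    img = img * band[:, None]

    # global contrast / brightness jitter
    img = np.clip(rng.uniform(0.8, 1.2) * img + rng.uniform(-0.05, 0.05), 0, 1)
    return img

vessels = (np.random.default_rng(0).random((128, 128)) > 0.97).astype(np.float32)
augmented = octa_style_augment(vessels)
print(augmented.shape, augmented.dtype)
```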
Detection transformers represent end-to-end object detection approaches based on a transformer encoder-decoder architecture, exploiting the attention mechanism for global relation modeling. Although detection transformers deliver results on par with, or even superior to, their highly optimized CNN-based counterparts on 2D natural images, their success is closely linked to access to large amounts of training data. This, however, limits the feasibility of employing detection transformers in the medical domain, where access to annotated data is typically limited. To tackle this issue and facilitate the advent of medical detection transformers, we propose a novel detection transformer for 3D anatomical structure detection, dubbed Focused Decoder. The Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view to regions of interest, which allows for a precise focus on relevant anatomical structures. We evaluate our proposed approach on two publicly available CT datasets and demonstrate that the Focused Decoder not only provides strong detection results, thereby alleviating the need for large amounts of annotated data, but also exhibits exceptional and highly intuitive explainability of results via attention weights. The code for the Focused Decoder is available in our medical vision transformer library github.com/bwittmann/transoar.
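A sketch of the central mechanism described above: cross-attention whose field of view is restricted to a region of interest taken from an anatomical atlas, so each query only attends to feature-map tokens inside its region. The toy tensor shapes and the hand-rolled attention are illustrative; this is not the Focused Decoder implementation.

```python
import torch
import torch.nn.functional as F

def focused_cross_attention(queries, keys, values, roi_mask):
    """
    queries:      B x Q x D   (one query anchor per anatomical structure)
    keys/values:  B x N x D   (flattened 3D feature-map tokens)
    roi_mask:     B x Q x N boolean, True where a token lies in the query's ROI.
    """
    d = queries.shape[-1]
    scores = queries @ keys.transpose(-2, -1) / d ** 0.5      # B x Q x N
    scores = scores.masked_fill(~roi_mask, float('-inf'))     # hide tokens outside the ROI
    attn = F.softmax(scores, dim=-1)
    return attn @ values                                      # B x Q x D

B, Q, N, D = 2, 5, 512, 64
queries = torch.randn(B, Q, D)
tokens = torch.randn(B, N, D)
roi_mask = torch.zeros(B, Q, N, dtype=torch.bool)
roi_mask[..., :64] = True          # pretend the first 64 tokens form each ROI
out = focused_cross_attention(queries, tokens, tokens, roi_mask)
print(out.shape)                   # torch.Size([2, 5, 64])
```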